5 research outputs found

    Greedy nominator heuristic: virtual function placement on fog resources

    Fog computing is an intermediate infrastructure between edge devices (e.g., Internet of Things devices) and cloud systems, used to reduce latency in real-time applications. An application can be composed of a collection of virtual functions, between which dependency constraints can be captured in a service function chain (SFC). Virtual functions within an SFC can be executed at different geo-distributed locations. However, virtual functions are prone to failure and often do not complete within a deadline. This results in function reallocation to other nodes within the infrastructure, causing delays, potential data loss during function migration, and increased costs. We propose the Greedy Nominator Heuristic (GNH) to address these issues. GNH is based on redundant deployment and failure tracking of virtual functions: it places replicas of each function at multiple locations, taking account of expected completion time, failure risk, and cost. We use a MapReduce-based mechanism, in which Mappers find suitable locations in parallel and a Reducer then ranks these locations. Our results show that GNH reduces latency by up to 68% and is more cost-effective than other approaches that rely on state-of-the-art optimization algorithms to allocate replicas.
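    The nominate-and-rank mechanism described above can be sketched as follows; the scoring function, weights, and node data here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of GNH-style replica placement (not the paper's code).
# Mappers score candidate nodes in parallel; a Reducer ranks the scores and
# nominates the k best locations for redundant deployment.
from concurrent.futures import ThreadPoolExecutor

def map_score(node, w_time=1.0, w_risk=10.0, w_cost=0.5):
    # Lower is better: a weighted sum of expected completion time,
    # failure risk, and monetary cost (weights are assumptions).
    score = (w_time * node["exp_time"]
             + w_risk * node["fail_risk"]
             + w_cost * node["cost"])
    return (node["name"], score)

def reduce_rank(scores, k=2):
    # Rank candidates by ascending score; nominate the k best as replicas.
    return [name for name, _ in sorted(scores, key=lambda s: s[1])[:k]]

nodes = [
    {"name": "edge-1",  "exp_time": 4.0, "fail_risk": 0.30, "cost": 1.0},
    {"name": "fog-1",   "exp_time": 2.5, "fail_risk": 0.10, "cost": 2.0},
    {"name": "cloud-1", "exp_time": 6.0, "fail_risk": 0.01, "cost": 3.0},
]

with ThreadPoolExecutor() as pool:   # Mappers evaluate locations in parallel
    scores = list(pool.map(map_score, nodes))

print(reduce_rank(scores, k=2))
```

    Here the fog node wins on completion time and the edge node on cost, so both are nominated ahead of the slower, pricier cloud node.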

    Performance analysis of Apache OpenWhisk across the edge-cloud continuum

    Serverless computing offers opportunities for auto-scaling, a pay-for-use cost model, quicker deployment, and faster updates to support computing services. Apache OpenWhisk is one such open-source, distributed serverless platform that can be used to execute user functions in a stateless manner. We conduct a performance analysis of OpenWhisk on an edge-cloud continuum, using a function chain of video analysis applications. We consider a combination of Raspberry Pi and cloud nodes to deploy OpenWhisk, modifying a number of parameters, such as maximum memory limit and runtime, to investigate application behaviours. The five main factors considered are: cold and warm activation, memory and input size, CPU architecture, runtime packages used, and concurrent invocations. The results are evaluated using initialization and execution time, minimum memory requirement, inference time, and accuracy.
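    The cold versus warm activation distinction studied above can be illustrated with a toy model; the init and execution costs below are arbitrary assumptions, not measured OpenWhisk figures.

```python
# Toy model of cold vs warm activation (illustrative, not OpenWhisk internals).
# A cold start pays container-creation and runtime-initialization cost once;
# subsequent invocations reuse the warm container and skip that cost.
import time

class ActionRuntime:
    def __init__(self, init_cost=0.05, exec_cost=0.01):
        self.warm = False
        self.init_cost = init_cost   # assumed container + runtime init time (s)
        self.exec_cost = exec_cost   # assumed function execution time (s)

    def invoke(self):
        start = time.perf_counter()
        if not self.warm:
            time.sleep(self.init_cost)   # cold start: create container, load runtime
            self.warm = True
        time.sleep(self.exec_cost)       # execute the user function
        return time.perf_counter() - start

rt = ActionRuntime()
cold = rt.invoke()   # first invocation: cold
warm = rt.invoke()   # second invocation: warm
print(f"cold={cold:.3f}s warm={warm:.3f}s")
```

    The same cold/warm gap is what makes concurrent invocations expensive on constrained hardware such as a Raspberry Pi, where each new container pays the initialization cost.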

    Adaptive edge-cloud environments for Rural AI

    Cloud computing provides on-demand access to computational resources while outsourcing infrastructure and service maintenance. Edge computing can extend cloud computing capability to areas with limited computing resources, such as rural areas, by utilizing low-cost hardware such as single-board computers. Machine learning algorithms hosted in cloud data centres may violate user privacy and data confidentiality requirements. Federated learning (FL) trains models without sending data to a central server, ensuring data privacy: multiple actors can collaborate on a single machine learning model without sharing data. However, rural network outages can happen at any time, and the quality of a wireless network varies with location, which can affect the performance of an FL application. There is therefore a need for a platform that maintains service quality independent of infrastructure status. We propose a self-adaptive system for rural FL, which employs Greedy Nominator Heuristic (GNH)-based optimisation to orchestrate application workflows across the multiple resources that make up a rural computing environment. GNH provides distributed optimisation for workflow placement, utilising resource status to reduce failure risks and costs while still completing tasks on time. Our approach is validated using a simulated rural environment, composed of multiple decentralized controllers sharing the same infrastructure and running a shared FL application. Results show that GNH outperforms three other algorithms for the deployment of FL tasks, including random placement.
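    The collaborate-without-sharing-data property of FL can be sketched as a minimal FedAvg-style round; the toy local update rule and client data are illustrative assumptions, not the paper's training procedure.

```python
# Minimal FedAvg-style round (illustrative): clients train locally and
# share only model weights with the server, never their raw data.
def local_update(weights, data, lr=0.1):
    # Toy "training" step: nudge each weight toward the mean of local data.
    mean = sum(data) / len(data)
    return [w + lr * (mean - w) for w in weights]

def fed_avg(client_weights):
    # Server aggregates by element-wise averaging of the client models.
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0]
clients = [[1.0, 2.0], [3.0, 5.0]]   # private data stays on each client
updates = [local_update(global_model, d) for d in clients]
global_model = fed_avg(updates)
print(global_model)   # aggregated model, computed without pooling any data
```

    In the rural setting described above, any of these client updates can fail or arrive late over an unreliable wireless link, which is exactly the placement problem GNH is used to mitigate.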

    Rural AI: Serverless-powered federated learning for remote applications

    With increasing connectivity to support digital services in urban areas, there is a realization that the capability to offer similar services in rural communities is still limited. To unlock the potential of Artificial Intelligence (AI) within rural economies, we propose Rural AI: the mobilization of serverless computing to enable AI in austere environments. Inspired by problems observed in New Zealand, we analyze major challenges in agrarian communities and define their requirements. We demonstrate a proof-of-concept Rural AI system for cross-field pasture weed detection that illustrates the capabilities serverless computing offers to traditional federated learning.

    A fault-tolerant workflow composition and deployment automation IoT framework in a multi-cloud edge environment

    With the rapid growth of Internet of Things (IoT) infrastructures, including cloud, edge, and IoT devices, system availability, safety, reliability, and maintainability become crucial in IoT application implementation processes. Fault tolerance, which refers to runtime fault detection and recovery, is considered a primary method for providing continuously reliable services in IoT environments. This paper introduces a novel IoT fault-tolerance model that offers self-detection and automatic recovery to increase application reliability. Based on this model, an IoT fault-tolerant workflow composition and deployment automation system is proposed to overcome infrastructure-level failures in heterogeneous IoT environments. The system leverages a layered architecture and a time-dependent failure model to offer deployment automation and infrastructure recovery. The efficiency and effectiveness of the proposed system are validated and evaluated in a real-world IoT application.
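    A time-dependent failure model driving automatic recovery, as described above, can be sketched as follows; the exponential failure law, rate, and node names are assumptions for illustration, not the paper's model.

```python
# Sketch of a time-dependent failure model with automatic redeployment.
# The constant-rate exponential law and all parameters are illustrative.
import math
import random

def failure_prob(t, rate=0.01):
    # P(failure by time t) under a constant failure rate:
    # F(t) = 1 - exp(-rate * t), so longer tasks are likelier to fail.
    return 1.0 - math.exp(-rate * t)

def deploy_with_recovery(nodes, runtime, rng, max_retries=3):
    """Deploy on successive nodes, redeploying whenever a failure is
    detected, until the task completes or the retry budget runs out."""
    for node in nodes[:max_retries + 1]:
        if rng.random() >= failure_prob(runtime):   # run survived
            return node
    return None   # all candidate nodes failed within the retry budget

rng = random.Random(42)   # fixed seed for reproducibility
chosen = deploy_with_recovery(["edge-a", "edge-b", "cloud"],
                              runtime=30, rng=rng)
print(chosen)
```

    The retry loop plays the role of the self-detection-and-recovery cycle: a detected failure simply triggers redeployment on the next candidate node.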